31 research outputs found

    Specification-driven test generation for model transformations

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-30476-7_3. Proceedings of the 5th International Conference, ICMT 2012, Prague, Czech Republic, May 28-29, 2012.

    Testing model transformations poses several challenges, among them the automatic generation of appropriate input test models and the specification of oracle functions. Most approaches to the generation of input models ensure a certain level of source meta-model coverage, whereas the oracle functions are frequently defined using query or graph languages. Both tasks are usually performed independently despite their common purpose, and sometimes there is a gap between the properties exhibited by the generated input models and those demanded of the transformations (as given by the oracles). Recently, we proposed a formal specification language for the declarative formulation of transformation properties (invariants, pre- and postconditions), from which we generated partial oracle functions that facilitate testing of the transformations. Here we extend the use of our specification language to the automated generation of input test models by constraint solving. The testing process becomes more intentional because the generated models ensure a certain coverage of the interesting properties of the transformation. Moreover, we use the same specification to consistently derive both the input test models and the oracle functions.

    Work funded by the Spanish Ministry of Economy and Competitiveness (TIN2011-24139) and by the R&D programme of the Madrid Region (S2009/TIC-1650).
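
    To make the constraint-solving idea concrete, here is a minimal sketch (Python with the Z3 SMT solver, not the paper's actual specification language or tooling) of generating a tiny input test model. Two hypothetical properties of a class-diagram metamodel, a well-formedness rule ("every class owns at least one attribute") and a transformation precondition ("at least one class is persistent"), are encoded as constraints and a satisfying model is read back.

```python
# Sketch only: hypothetical metamodel properties, not the paper's language.
from z3 import And, Bool, Int, Or, Solver, sat

NUM_CLASSES = 3
solver = Solver()

# One attribute count and one "persistent" flag per class in the test model.
attrs = [Int(f"attrs_{i}") for i in range(NUM_CLASSES)]
persistent = [Bool(f"persistent_{i}") for i in range(NUM_CLASSES)]

# Well-formedness rule: every class owns between 1 and 5 attributes.
for a in attrs:
    solver.add(And(a >= 1, a <= 5))

# Transformation precondition: at least one class is persistent.
solver.add(Or(*persistent))

if solver.check() == sat:
    model = solver.model()
    for i in range(NUM_CLASSES):
        print(f"Class{i}: attrs={model[attrs[i]]}, "
              f"persistent={model[persistent[i]]}")
```

    Because the same constraints could also be evaluated against a transformation's output, one specification can drive both test-model generation and (partial) oracle checking, which is the consistency argument the abstract makes.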

    Contracts for Model Execution Verification

    Get PDF
    One of the main goals of model-driven engineering is the manipulation of models as exclusive software artifacts. Model execution is in particular a means to substitute models for code. In this paper we focus on verifying model executions. We use a contract-based approach to specify an execution semantics for a meta-model. We show that an execution semantics is a seamless extension of a rigorous meta-model specification and is composed of complementary levels, from the definition of static elements to dynamic elements and execution specifications. We use model transformation contracts to control the consistent dynamic evolution of a model during its execution. As an illustration, we apply our approach to UML state machines, using OCL as the contract expression language.
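
    The following is a minimal sketch of the contract idea in Python (illustrative names, not the paper's OCL contracts or API): a toy state machine whose execution step is guarded by an invariant, a precondition, and a postcondition, so that any inconsistent evolution of the model raises an error during execution.

```python
# Sketch only: OCL-style contracts mimicked with runtime checks in Python.
class ContractViolation(Exception):
    pass

class StateMachine:
    def __init__(self, states, transitions, initial):
        self.states = set(states)
        self.transitions = dict(transitions)  # (source, event) -> target
        self.active = initial
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: the active state is always a declared state.
        if self.active not in self.states:
            raise ContractViolation(f"invariant: unknown state {self.active!r}")

    def fire(self, event):
        key = (self.active, event)
        # Precondition: a transition for (active state, event) is declared.
        if key not in self.transitions:
            raise ContractViolation(f"pre: no transition for {key}")
        target = self.transitions[key]
        self.active = target
        # Postcondition: the step landed on the declared target, and the
        # model is still consistent (invariant re-checked after execution).
        if self.active != target:
            raise ContractViolation("post: unexpected active state")
        self._check_invariant()
        return self.active

sm = StateMachine({"Idle", "Busy"},
                  {("Idle", "start"): "Busy", ("Busy", "stop"): "Idle"},
                  "Idle")
print(sm.fire("start"))  # Busy
```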

    Software Testing Techniques Revisited for OWL Ontologies

    Get PDF
    Ontologies are an essential component of semantic knowledge bases and applications, and nowadays they are used in a plethora of domains. Despite the maturity of ontology languages, support tools and engineering techniques, the testing and validation of ontologies is a field which still lacks consolidated approaches and tools. This paper attempts to partly bridge that gap, taking a first step towards the extension of some traditional software testing techniques to ontologies expressed in a widely-used format. Mutation testing and coverage testing, revisited in the light of the peculiar features of the ontology language and structure, can assist in designing better test suites to validate them, and overall help in the engineering and refinement of ontologies and software based on them.
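
    As a rough illustration of mutation testing transferred to ontologies (plain Python over a toy axiom set, not a real OWL toolchain or the paper's operators): axioms are subclass pairs, mutants are produced by deleting or flipping one axiom, and a test suite of expected entailments "kills" the mutants it can distinguish from the original.

```python
# Sketch only: toy subclass axioms and invented mutation operators.
def entails(axioms, sub, sup):
    # Entailment via transitive closure over subclass axioms.
    frontier, seen = {sub}, set()
    while frontier:
        cur = frontier.pop()
        if cur == sup:
            return True
        seen.add(cur)
        frontier |= {b for (a, b) in axioms if a == cur and b not in seen}
    return False

ontology = {("Dog", "Mammal"), ("Mammal", "Animal"), ("Cat", "Mammal")}
tests = [("Dog", "Animal", True), ("Animal", "Dog", False)]

def mutants(axioms):
    for ax in axioms:
        yield axioms - {ax}                    # axiom deletion
        yield (axioms - {ax}) | {ax[::-1]}     # subclass direction flip

killed = sum(
    any(entails(m, s, o) != expected for (s, o, expected) in tests)
    for m in mutants(ontology)
)
print(f"mutation score: {killed}/{2 * len(ontology)}")  # surviving mutants
                                                        # reveal weak tests
```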

    Automatic Model Generation Strategies for Model Transformation Testing

    Get PDF
    Testing model transformations requires input models which are graphs of inter-connected objects that must conform to a meta-model and to meta-constraints from heterogeneous sources such as well-formedness rules, transformation preconditions, and test strategies. Manually specifying such models is tedious, since each model must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction, using our tool Cartier, for model transformation testing. Due to the virtually infinite number of models in the input domain, we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using the generated sets of models. The test sets obtained using partitioning strategies give mutation scores of up to 87%, versus 72% for unguided/random generation. These scores are based on the analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
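
    A minimal sketch of the partitioning idea (hypothetical property names and partitions, not Cartier's actual strategy): each metamodel property is split into equivalence classes, and one representative per class is combined into model "frames" that a constraint solver would then complete into full test models, guaranteeing that every partition is exercised rather than sampled at random.

```python
# Sketch only: invented partitions for two class-diagram properties.
from itertools import product

partitions = {
    "attributes_per_class": [0, 1, 3],    # representatives of {0}, {1}, {2..*}
    "is_persistent":        [False, True],
}

def frames(parts):
    # Cartesian combination of one representative per partition.
    keys = list(parts)
    for combo in product(*(parts[k] for k in keys)):
        yield dict(zip(keys, combo))

for i, frame in enumerate(frames(partitions)):
    print(f"model frame {i}: {frame}")
# Yields 6 frames covering every partition pair; unguided random sampling
# of the (virtually infinite) input domain offers no such guarantee.
```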